Finite horizon control of processing networks via fluid approach: Separated continuous linear programs, infinite virtual buffers and maximum pressure policies
Author
Abstract
We consider systems in which many items evolve over time by sharing common resources, and the problem of how to control such systems by allocating resources to various activities which schedule, route and process these items. We represent this by a processing network as defined by Harrison, with the added feature of infinite virtual buffers, which can model exogenous input and output. We address the problem of transient control of such processing networks over a finite time horizon. We use a fluid approach, in which we approximate the processing network by a deterministic continuous linear fluid model and formulate its optimization as a separated continuous linear program. This can be solved by a new simplex-type algorithm of Weiss, and the solution consists of piecewise constant allocations of the activities, with a finite number of breakpoints. This optimal fluid solution is then used to control the original system. We use the maximum pressure policy of Dai and Lin to track the optimal fluid solution. We prove asymptotic optimality of this fluid approach: as the numbers of items in the system and the processing rates increase, the scaled system converges almost surely to the optimal fluid solution, and the scaled objective value converges to the optimum of the system.
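The tracking rule described above can be illustrated with a minimal sketch of a maximum-pressure selection step: at each decision epoch, pick the activity whose net drain rates, weighted by the current queue lengths, are largest. The two-buffer tandem network, the matrix `D`, and the function name below are illustrative assumptions, not the paper's notation or the exact Dai–Lin formulation.

```python
import numpy as np

def max_pressure_activity(q, D):
    """Return the index of the activity with maximal pressure q . D[:, j].

    q : queue length of each buffer
    D : D[k, j] is the net drain rate of buffer k when activity j runs
        at full allocation (positive = j empties k, negative = j fills k)
    """
    pressures = q @ D          # one pressure value per activity
    return int(np.argmax(pressures))

# Toy tandem network: activity 0 moves work from buffer 0 into buffer 1,
# activity 1 drains buffer 1 out of the system.
D = np.array([[ 1.0, 0.0],
              [-1.0, 1.0]])

print(max_pressure_activity(np.array([5.0, 1.0]), D))  # prints 0
print(max_pressure_activity(np.array([1.0, 5.0]), D))  # prints 1
```

When buffer 0 dominates, draining it (and filling buffer 1) has the highest weighted drain rate; once buffer 1 dominates, the policy switches to the downstream activity.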
Similar works
Near optimal control of queueing networks over a finite time horizon
We propose a method for the control of multi-class queueing networks over a finite time horizon. We approximate the multi-class queueing network by a fluid network and formulate a fluid optimization problem which we solve as a separated continuous linear program. The optimal fluid solution partitions the time horizon into intervals in which constant fluid flow rates are maintained. We then use a ...
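The separated continuous linear program in these papers is solved exactly by a simplex-type algorithm; as a rough illustration of the underlying idea, one can instead discretize time and solve the resulting ordinary LP. The single-buffer instance, the rates, and the use of `scipy.optimize.linprog` below are assumptions for illustration, not the authors' method.

```python
import numpy as np
from scipy.optimize import linprog

# Single fluid buffer: inflow rate alpha, service rate mu, initial level q0.
# Minimize the integral of the buffer level over [0, T], discretized into
# N steps of length dt, with an allocation u_t in [0, 1] per step.
alpha, mu, q0, T, N = 1.0, 2.0, 4.0, 4.0, 4
dt = T / N

# Decision variables: x = [u_0 .. u_{N-1}, q_1 .. q_N]
c = np.concatenate([np.zeros(N), dt * np.ones(N)])   # cost: dt * sum(q_t)

# Dynamics: q_{t+1} = q_t + (alpha - mu * u_t) * dt, rearranged so that
# mu*dt*u_t + q_{t+1} - q_t = alpha*dt  (with q_0 folded into the RHS).
A_eq = np.zeros((N, 2 * N))
b_eq = np.full(N, alpha * dt)
for t in range(N):
    A_eq[t, t] = mu * dt           # + mu*dt * u_t
    A_eq[t, N + t] = 1.0           # + q_{t+1}
    if t == 0:
        b_eq[0] += q0              # q_1 + mu*dt*u_0 = q0 + alpha*dt
    else:
        A_eq[t, N + t - 1] = -1.0  # - q_t

bounds = [(0.0, 1.0)] * N + [(0.0, None)] * N
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x[:N])   # optimal allocations: full effort in every interval
print(res.fun)     # total holding cost, here 6.0
```

The optimal allocation is constant on each interval, mirroring the piecewise constant structure of the exact continuous solution; here the buffer drains at rate mu - alpha = 1 and empties exactly at the horizon.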
An Eigenvalue Approach to Infinite-horizon Optimal Control
A method for finding optimal control policies for first-order, state-constrained stochastic dynamic systems in continuous time is presented. The method relies on solution of the Hamilton-Jacobi-Bellman equation, which includes a diffusion term related to the stochastic disturbance in the model. A variable transformation is applied that turns the infinite-horizon optimal control problem into a l...
Optimal Finite-time Control of Positive Linear Discrete-time Systems
This paper considers an optimization problem for linear discrete-time systems such that the closed-loop discrete-time system is positive (i.e., all of its state variables take non-negative values) and also finite-time stable. For this purpose, by considering a quadratic cost function, an optimal controller is designed such that, in addition to minimizing the cost function, the positivity proper...
A New Approach for Approximating Solution of Continuous Semi-Infinite Linear Programming
This paper describes a new optimization method for solving continuous semi-infinite linear problems. With regard to the dual properties, the problem is presented as a measure theoretical optimization problem, in which the existence of the solution is guaranteed. Then, on the basis of the atomic measure properties, a computation method was presented for obtaining the near optimal so...
Optimal Job Sequencing Control in a Benchmark Reentrant Line with Finite Capacity Buffers
This paper presents the optimality condition, Bellman’s equation, and an optimal job sequencing policy for a benchmark reentrant line (RL) with finite capacity buffers. The optimal policy is obtained for an infinite-horizon discounted cost criterion, and is characterized by an index which is a function of the one-stage cost function and the system’s parameters. In addition, when the buffer levels a...
Publication date: 2005